Deriving Rewards for Reinforcement Learning from Symbolic Behaviour Descriptions of Bipedal Walking

Harnack, Daniel, Lüth, Christoph, Gross, Lukas, Kumar, Shivesh, Kirchner, Frank

arXiv.org Artificial Intelligence

Generating physical movement behaviours from their symbolic description is a long-standing challenge in artificial intelligence (AI) and robotics, requiring insights into numerical optimization methods as well as into formalizations from symbolic AI and reasoning. In this paper, a novel approach to deriving a reward function from a symbolic description is proposed. The intended system behaviour is modelled as a hybrid automaton, which reduces the system state space and thereby allows more efficient reinforcement learning. The approach is applied to bipedal walking by modelling the walking robot as a hybrid automaton over state-space orthants, and is used with the compass walker to derive a reward that incentivizes following the hybrid automaton cycle. As a result, training times of reinforcement learning controllers are reduced while final walking speed is increased. The approach can serve as a blueprint for generating reward functions from symbolic AI and reasoning.
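The core idea of this abstract, rewarding the learner for progressing through a prescribed cycle of state-space orthants, can be sketched as follows. This is an illustrative reconstruction, not the paper's actual reward function: the function names, the bonus/penalty values, and the use of a plain sign pattern to identify orthants are all assumptions for the sake of the example.

```python
import numpy as np

def orthant_signature(state):
    """Sign pattern of the state vector, identifying which orthant it lies in."""
    return tuple(np.sign(state).astype(int))

def make_cycle_reward(orthant_cycle, bonus=1.0, penalty=-0.1):
    """Build a reward that pays a bonus whenever the state advances to the
    next orthant in the prescribed cycle, and a small penalty otherwise.
    Returns (reward, updated cycle index) so the caller can track progress."""
    def reward(state, cycle_index):
        nxt = (cycle_index + 1) % len(orthant_cycle)
        if orthant_signature(state) == orthant_cycle[nxt]:
            return bonus, nxt
        return penalty, cycle_index
    return reward
```

For a planar compass walker the state might be (inter-leg angle, inter-leg angular velocity), and a full gait cycle would visit four orthants in a fixed order; the reward above pays out each time a transition in that order is observed.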


Hierarchical control strategy for planar bipedal walking robots based on reduced order model

Vu, Minh Nhat

arXiv.org Artificial Intelligence

In this work, a hierarchical control strategy based on template-based control for a bipedal robot is described. The axial force of a compliant leg is redirected to a point, called the virtual pivot point (VPP), located above the CoM of a 2D biped model, to generate a restoring moment for the trunk motion. The resulting behavior of the model resembles a virtual pendulum rotating around this VPP, thus aiming for an upright trunk during walking. Inspired by this analysis, we propose a new force redirection method as a controller for robot walking. These key features of the BTSLIP model with a simple force direction controller are then mapped into the overall input torques of an articulated-body robot via a task space controller. We consider a full dynamic simulation of a 2D articulated-body robot to validate the performance of the proposed method under random initial conditions and in the presence of force disturbances and moderately rough surfaces. Moreover, with our control strategy, the robot achieves a stable walking motion while keeping its upper body upright without using optimization methods. We hypothesize that, by taking advantage of the properties of mechanical templates, also called reduced-order models, stable gait can be enabled for the full robot model without the need for precise path planning.
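The geometric heart of VPP control, pointing the leg force from the foot toward a virtual pivot above the CoM, can be sketched in a few lines. This is a simplified illustration under assumed conventions (2D world frame, trunk angle measured from vertical, axial force magnitude kept unchanged), not the paper's controller; the function name and parameters are hypothetical.

```python
import numpy as np

def vpp_leg_force(com, foot, trunk_angle, r_vpp, f_axial):
    """Redirect the leg's axial force so its line of action passes through
    the virtual pivot point (VPP), located a distance r_vpp above the CoM
    along the trunk axis. Returns the redirected 2D force vector at the foot."""
    # VPP position in the world frame; the trunk axis is tilted by
    # trunk_angle (rad) from vertical.
    vpp = com + r_vpp * np.array([-np.sin(trunk_angle), np.cos(trunk_angle)])
    # Unit vector from the stance foot toward the VPP.
    direction = vpp - foot
    direction = direction / np.linalg.norm(direction)
    # Keep the axial magnitude, change only the direction.
    return f_axial * direction
```

Because the redirected force line passes through a point above the CoM, any trunk lean produces a moment about the CoM that pushes the trunk back upright, which is the virtual-pendulum behavior the abstract describes.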